With Dragonfly's Deep Learning Tool, even non-experts in image processing and artificial intelligence can produce robust and reproducible segmentation results by training a deep model for semantic segmentation. You simply label features on a training set, train the model, and then let Dragonfly do the tedious segmentation, saving time and minimizing user bias.
The following videos provide an introduction to training deep models for semantic segmentation.
Training a deep model for segmenting fabric fibers (23:16):
(https://www.youtube.com/watch?v=1WVlskyuw94).
Training a deep model for segmenting multiple material phases (11:44):
(https://www.youtube.com/watch?v=Sl6vv51T7Mg).
The following items are required for training a deep model for semantic segmentation:
You should note that a selection of untrained models suitable for binary and multi-class segmentation are supplied with the Deep Learning Tool (see Generating Models for Semantic Segmentation). You can also download models from the Infinite Toolbox (see Infinite Toolbox), or import models from Keras.
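If you plan to exchange models with Keras outside Dragonfly, the round trip on the Keras side is typically a save followed by a load, as in the illustrative sketch below. The file name and HDF5 format shown here are assumptions made for the example; check the import dialog for the formats Dragonfly actually accepts.

    # Illustrative only: saving and reloading a Keras model outside Dragonfly.
    from tensorflow import keras

    model = keras.Sequential([
        keras.Input(shape=(None, None, 1)),
        keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
        keras.layers.Conv2D(2, 1, activation="softmax"),   # 2 output classes
    ])
    model.save("my_unet.h5")                           # export as a single HDF5 file (assumed name)
    restored = keras.models.load_model("my_unet.h5")   # reload the same model elsewhere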
The following items are optional for training a deep model for semantic segmentation:
Multi-ROIs that are used as the target output for semantic segmentation must meet the following conditions:
Note Applying a mask may limit the number of input patches that are processed (see Applying Masks).
As shown in the illustration below, three distinct material phases in a training dataset were labeled as separate classes. You should note that labeling can be done directly on a multi-ROI as of Dragonfly version 2020.1 (see Multi-ROI Classes and Labels). You can also choose to work on multiple regions of interest, from which you can create a multi-ROI (see Creating Multi-ROIs from Regions of Interest).
Multi-class labeling
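Conceptually, a multi-ROI with N classes is equivalent to an integer label map that can be expanded into a one-hot target with one channel per class. The snippet below is a generic illustration of that correspondence using made-up values; it is not Dragonfly's internal representation.

    # Generic illustration: a 3-class label map expanded to a one-hot target.
    import numpy as np
    from tensorflow.keras.utils import to_categorical

    labels = np.array([[0, 0, 1],
                       [2, 1, 1],
                       [2, 2, 0]])                  # integer class per pixel/voxel
    one_hot = to_categorical(labels, num_classes=3)
    print(one_hot.shape)                            # (3, 3, 3): one channel per class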
Dragonfly's Deep Learning Tool provides a number of deep models — including U-Net, DeepLabV3+, FC-DenseNet, and others — that are suitable for binary and multi-class semantic segmentation. Semantic segmentation is the process of associating the voxels of an image with a class label.
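For readers who want a concrete picture of what these architecture names refer to, the sketch below is a deliberately tiny U-Net-style encoder/decoder in Keras, with a single skip connection and a softmax output sized to the class count. It is illustrative only; the models supplied with the Deep Learning Tool are already configured and do not need to be rebuilt by hand.

    # Minimal U-Net-style sketch (illustrative; not the Deep Learning Tool's model).
    from tensorflow import keras
    from tensorflow.keras import layers

    def tiny_unet(num_classes, channels=1):
        inputs = keras.Input(shape=(None, None, channels))
        # Encoder: convolve, then downsample.
        c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
        p1 = layers.MaxPooling2D(2)(c1)
        # Bottleneck.
        c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
        # Decoder: upsample and concatenate the skip connection.
        u1 = layers.UpSampling2D(2)(c2)
        u1 = layers.Concatenate()([u1, c1])
        c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
        # One output channel per class, softmax across classes.
        outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c3)
        return keras.Model(inputs, outputs)

    model = tiny_unet(num_classes=4)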
The Deep Learning Tool dialog appears.
The Model Generator dialog appears (see Model Generator for additional information about the dialog).
This will filter the available architectures to those recommended for segmentation.

Note A description of each architecture is available in the Architecture Description box, along with a link for more detailed information.
Recommendation U-Net models are often the easiest to train and produce good results in many cases. For a more powerful option, you might consider Sensor3D, which uses a 3D context.

For example, if your training set multi-ROI has four classes, then you must enter a Class Count of 4.
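If you are unsure of the class count, it is simply the number of distinct labels in the training target. The check below is generic and uses a made-up array for illustration.

    # Generic check with a made-up array: the Class Count is the number of
    # distinct classes in the training target.
    import numpy as np

    target = np.array([0, 1, 1, 2, 3, 3])     # stand-in for a 4-class multi-ROI
    class_count = len(np.unique(target))      # -> 4, the value to enter
    print(class_count)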

After processing is complete, a confirmation message appears at the bottom of the dialog.
Information about the loaded model, as well as a graph view of the data flow, appears in the dialog (see Model Information and Graph View).
You can start training a supported model for semantic segmentation after you have prepared your training input(s) and output(s), as well as any required masks (see Prerequisites).
To open the Deep Learning Tool, choose Artificial Intelligence > Deep Learning Tool on the menu bar.
Information about the model appears in the Model information box (see Model Information), while the Graph view shows the data flow through the model (see Graph View).
Note In most cases, you should be able to train a multi-class segmentation model supplied with the Deep Learning Tool as is, without making changes to its architecture.
The Model Training panel appears (see Model Training Panel).
Note If you chose to train your model in 3D, then additional options will appear for the input, as shown below. See Configuring Multi-Slice Inputs for information about selecting reference slices and spacing values.
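As an intuition for what a multi-slice (3D context) input is, the snippet below stacks a reference slice together with one neighbour on each side, at a fixed spacing, into a single multi-channel patch. The volume, reference slice, and spacing values are invented for the example; in Dragonfly you choose them in the dialog.

    # Illustration of a multi-slice input: the reference slice plus one
    # neighbour on each side, stacked as channels. Values are made up.
    import numpy as np

    volume = np.random.rand(64, 128, 128)     # (slices, height, width)
    ref, spacing = 30, 2                      # reference slice index and slice spacing
    multi_slice = np.stack(
        [volume[ref - spacing], volume[ref], volume[ref + spacing]],
        axis=-1,
    )                                         # shape (128, 128, 3)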

Note If your model requires multiple inputs, select the additional input(s), as required (see Training with Multiple Inputs).

Note Only multi-ROIs whose number of classes matches the model's class count will be available in the menu.
The completed Training Data should look something like this:

Note If you are training with multiple training sets, click the Add New button and then choose the required input(s), output, and mask for the additional item(s).

See Basic Settings for information about choosing an input (patch) size, epochs number, loss function, optimization algorithm, and so on.
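For orientation, these choices map onto the usual Keras training calls roughly as shown below. The patch size, epoch count, batch size, and data arrays are placeholder example values, not recommended settings, and this is not how Dragonfly runs training internally.

    # Rough Keras analogue of the Basic Settings (all values are examples only).
    import numpy as np
    from tensorflow import keras

    patch_size, num_classes = 64, 4
    x_train = np.random.rand(32, patch_size, patch_size, 1)        # input patches (placeholder)
    y_train = keras.utils.to_categorical(
        np.random.randint(num_classes, size=(32, patch_size, patch_size)),
        num_classes,
    )                                                               # one-hot targets (placeholder)

    model = keras.Sequential([
        keras.Input(shape=(patch_size, patch_size, 1)),
        keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
        keras.layers.Conv2D(num_classes, 1, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=1e-3),        # optimization algorithm
        loss="categorical_crossentropy",                            # loss function
        metrics=["categorical_accuracy"],
    )
    model.fit(x_train, y_train, epochs=10, batch_size=8, validation_split=0.2)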

Note You should monitor the estimated memory ratio when you choose the training parameter settings. The ratio should not exceed 1.00 (see Estimated Memory Ratio).
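Dragonfly computes the estimated memory ratio for you. Purely as an intuition for why larger patches, larger batch sizes, and extra input channels push the ratio up, a back-of-envelope for the input tensor alone might look like the sketch below. All numbers are invented, real usage also includes activations, gradients, and optimizer state, and this is not the formula Dragonfly uses.

    # Back-of-envelope only: memory for one batch of float32 input patches.
    # Real training also stores activations, gradients, and optimizer state.
    batch_size, patch, channels = 32, 64, 3     # example values
    bytes_per_value = 4                         # float32
    input_bytes = batch_size * patch * patch * channels * bytes_per_value
    available_bytes = 8 * 1024**3               # pretend 8 GB is available
    print(input_bytes / available_bytes)        # small here; activations dominate in practice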
You should note that this step is optional and that these settings can be adjusted after you have evaluated the initial training results.
You can monitor the progress of training on the Progress bar, as shown below.

Recommendation During training, the quantity 'val_categorical_accuracy' should increase, while 'val_loss' should decrease. You should continue to train until 'val_loss' stops decreasing. In most cases, you should increase the number of significant digits to make sure that small changes to monitored quantities can be noted (see Selecting the Views Preferences).
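In plain Keras terms, "train until 'val_loss' stops decreasing" is an early-stopping criterion. The callback below is a generic illustration of that idea for readers who also train outside Dragonfly; it is not how the Deep Learning Tool's stop control works.

    # Generic Keras illustration of "train until val_loss stops decreasing".
    from tensorflow import keras

    early_stop = keras.callbacks.EarlyStopping(
        monitor="val_loss",           # the quantity that should keep decreasing
        patience=5,                   # tolerate a few epochs without improvement
        restore_best_weights=True,    # keep the weights from the best epoch
    )
    # Usage: model.fit(x_train, y_train, validation_split=0.2,
    #                  epochs=100, callbacks=[early_stop])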
When training is complete or is stopped, the Training Results dialog appears.
You can also generate previews of the original data or test set to further evaluate the model (see Generating Previews).
Note If your results continue to be unsatisfactory, you might consider changing the selected model.
After training is complete or stopped, you can evaluate the results in the Training Results dialog, as well as generate previews of a test set or the original data to further evaluate the predictive model (see Generating Previews).
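For readers used to Keras, generating a preview amounts to running the trained model on unseen data and taking the most probable class per pixel or voxel. The snippet below shows that generic idea with placeholder data and an untrained stand-in model; it is not Dragonfly's preview mechanism.

    # Generic illustration of previewing a segmentation: predict class
    # probabilities, then take the argmax per pixel/voxel. Data are placeholders.
    import numpy as np
    from tensorflow import keras

    num_classes = 4
    model = keras.Sequential([
        keras.Input(shape=(None, None, 1)),
        keras.layers.Conv2D(num_classes, 1, activation="softmax"),
    ])                                               # stand-in for a trained model
    test_patches = np.random.rand(4, 64, 64, 1)      # stand-in for a test set
    probabilities = model.predict(test_patches)      # (4, 64, 64, num_classes)
    predicted_labels = probabilities.argmax(axis=-1) # (4, 64, 64) class map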
The Training Results dialog, shown below, appears automatically after training is complete or stopped.
Training Results dialog
You can do the following to evaluate the training results: